But suppose, as is perhaps more typical, something stimulates each eye in corresponding places. Let us say the left eye received the stimulus XXAXXBXX and the right eye received the stimulus XXAXXXBX. If the observer fixates A, there is the possibility of fusing the left eye’s B with the right eye’s X and the left eye’s X with the right eye’s B, since these fall on corresponding points. But that will not happen: The left eye’s A and B will be fused with the right eye’s A and B, because of the similarity between them. Apparently, the perceptual system scans the two images and decides, on the basis of similarity, which units in each most probably correspond, the implication being that those that correspond are produced by the same contours in the outer world. (David Marr and Tomaso Poggio developed an algorithm, based on certain "assumptions" about the outer scene, that would enable a computer or the brain to solve just such a correspondence problem.) Once this scanning takes place, the perceptual system can evaluate the disparity in terms of depth. If B is relatively near to A in the left eye’s image but is farther from A in the right eye’s image, then it follows that B is an object behind the plane of A. A process similar to reasoning must occur in arriving at the depth interpretation. If some agency of mind has available to it the sensory information reaching the two eyes, and if it "knows" in which eye each retinal stimulus originates, it can compute depth. A process of this kind would render stereoscopic depth constancy understandable. For example, if the perceptual system "knows" that disparity diminishes as a function of distance, it can take distance into account in inferring the magnitude of depth.
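To make the two inferential steps concrete, here is a minimal sketch in Python. It is not Marr and Poggio's algorithm; it simply pairs features across the two eyes' images by similarity, then converts the residual disparity of each pair into a signed depth relative to the fixated element, scaling by viewing distance as the constancy argument requires. The function names, the interocular constant, and the small-angle rule depth ≈ disparity × D²/I are all illustrative assumptions, not claims about the visual system.

```python
# Illustrative sketch only -- not Marr and Poggio's algorithm.
# Features are single characters; "X" marks an unlabelled background point.

INTEROCULAR = 6.5  # assumed interocular separation (cm), used only for scaling


def match_by_similarity(left: str, right: str) -> dict:
    """Correspondence step: pair each labelled feature in the left
    image with the identical feature in the right image."""
    return {
        ch: (i, right.index(ch))
        for i, ch in enumerate(left)
        if ch != "X" and ch in right
    }


def relative_depths(left: str, right: str, fixated: str = "A",
                    viewing_distance: float = 100.0) -> dict:
    """Depth step: for each matched feature, compare its separation from
    the fixated element in the two images.  The difference is the
    disparity; an assumed small-angle rule, depth ~ disparity * D**2 / I,
    scales it by viewing distance (the 'constancy' step)."""
    matches = match_by_similarity(left, right)
    fix_l, fix_r = matches[fixated]
    depths = {}
    for feature, (l, r) in matches.items():
        if feature == fixated:
            continue
        disparity = abs(r - fix_r) - abs(l - fix_l)
        depths[feature] = disparity * viewing_distance ** 2 / INTEROCULAR
    return depths  # positive: behind the plane of fixation


print(relative_depths("XXAXXBXX", "XXAXXXBX"))
# B is 3 units from A in the left image but 4 in the right, so the
# sketch reports a positive depth: B lies behind the plane of A.
```

Reversing the sign of the disparity (B nearer to A in the right image than in the left) would place B in front of the fixation plane, and for a fixed disparity, halving the assumed viewing distance quarters the inferred depth, which is just the constancy relation described above.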